When using vision-based methods to classify individual parking spaces as occupied or empty, human experts usually need to annotate the spot locations and label a training set of images collected from the target parking lot in order to fine-tune a system. We propose to study three annotation types (polygons, bounding boxes, and fixed-size squares) that provide different data representations of the parking spaces. The rationale is to elucidate the best trade-off between handcrafted annotation precision and model performance. We also investigate the number of annotated parking spaces necessary to fine-tune a pre-trained model on the target parking lot. Experiments using the PKLot dataset show that, with low-precision annotations such as fixed-size squares, a model can be fine-tuned to the target parking lot with fewer than 1,000 labeled samples.
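To make the lowest-precision annotation type concrete, the sketch below crops fixed-size squares around annotated spot centers and swaps the head of a pretrained classifier for fine-tuning. It is our own illustration, not the paper's code; the 64-pixel crop size, the MobileNetV3 backbone, and the (x, y) pixel-coordinate convention are all assumptions.

```python
# Minimal sketch: fixed-size square crops per parking spot, plus a
# pretrained backbone whose head is replaced for 2-class fine-tuning.
import torch
import torch.nn as nn
from torchvision import models

def crop_fixed_squares(lot_image: torch.Tensor, centers, size: int = 64):
    """Crop a fixed-size square around each annotated spot center.

    lot_image: (C, H, W) tensor of the full parking-lot image.
    centers:   iterable of integer (x, y) pixel coordinates, one per spot.
    """
    half = size // 2
    _, h, w = lot_image.shape
    crops = []
    for x, y in centers:
        x0 = min(max(x - half, 0), w - size)   # clamp to image bounds
        y0 = min(max(y - half, 0), h - size)
        crops.append(lot_image[:, y0:y0 + size, x0:x0 + size])
    return torch.stack(crops)                  # (N, C, size, size)

# Fine-tune a pretrained backbone on the crops (occupied vs. empty).
model = models.mobilenet_v3_small(weights="DEFAULT")
model.classifier[-1] = nn.Linear(model.classifier[-1].in_features, 2)

batch = crop_fixed_squares(torch.rand(3, 720, 1280), [(100, 200), (400, 500)])
logits = model(batch)                           # (2, 2) class logits
```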
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%). 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
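For readers unfamiliar with two of the practices the survey quantifies, the toy sketch below combines k-fold cross-validation on the training set with ensembling of multiple identical models by averaging their predicted probabilities. It is a generic scikit-learn example of ours, not code from any surveyed solution; the dataset and model are stand-ins.

```python
# Toy sketch: 5-fold cross-validation plus an ensemble of the per-fold models.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import KFold

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_test = X[-50:]                    # stand-in for held-out challenge data

fold_models = []
for train_idx, val_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    model = RandomForestClassifier(random_state=0).fit(X[train_idx], y[train_idx])
    print("fold validation accuracy:", model.score(X[val_idx], y[val_idx]))
    fold_models.append(model)

# Ensemble of identical models: average the per-fold class probabilities.
ensemble_proba = np.mean([m.predict_proba(X_test) for m in fold_models], axis=0)
ensemble_pred = ensemble_proba.argmax(axis=1)
```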
In the last decade, exponential data growth has fueled the capacity of machine learning-based algorithms and enabled their usage in daily-life activities. Additionally, such an improvement is partially explained by the advent of deep learning techniques, i.e., stacks of simple architectures that result in more complex models. Although both factors produce outstanding results, they also pose drawbacks regarding the learning process, as training complex models over large datasets is expensive and time-consuming. Such a problem is even more evident when dealing with video analysis. Some works have considered transfer learning or domain adaptation, i.e., approaches that map the knowledge from one domain to another, to ease the training burden, yet most of them operate over individual or small blocks of frames. This paper proposes a novel approach to map the knowledge from action recognition to event recognition using an energy-based model, denoted Spectral Deep Belief Network. Such a model can process all frames simultaneously, carrying spatial and temporal information through the learning process. The experimental results conducted over two public video datasets, HMDB-51 and UCF-101, demonstrate the effectiveness of the proposed model and its reduced computational burden when compared to traditional energy-based models, such as Restricted Boltzmann Machines and Deep Belief Networks.
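For context on the energy-based baselines being compared against: a standard binary Restricted Boltzmann Machine assigns each configuration of visible and hidden units an energy and a Boltzmann probability. This is the textbook formulation, not the spectral variant the paper proposes:

```latex
E(\mathbf{v},\mathbf{h}) = -\mathbf{a}^{\top}\mathbf{v} - \mathbf{b}^{\top}\mathbf{h}
                           - \mathbf{v}^{\top}\mathbf{W}\mathbf{h},
\qquad
P(\mathbf{v},\mathbf{h}) = \frac{e^{-E(\mathbf{v},\mathbf{h})}}{Z},
```

where v and h are the visible and hidden units, a, b, and W are the biases and weight matrix, and Z is the partition function summing e^{-E} over all configurations. Deep Belief Networks stack such layers, which is what makes their training over long videos costly.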
Previous works have shown that Deep-RL can be applied to perform mapless navigation, including the medium transition of hybrid unmanned aerial-underwater vehicles (HUAUVs). This paper presents new approaches based on state-of-the-art actor-critic algorithms to address the navigation and medium-transition problems of HUAUVs. We show that double-critic Deep-RL with recurrent neural networks improves the navigation performance of HUAUVs using only range data and relative localization. Our Deep-RL approaches achieve better navigation and transitioning capabilities, with solid generalization of the learning across different simulated scenarios, outperforming previous approaches.
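A schematic sketch of the architecture class described here, a double-critic network with a recurrent encoder: an LSTM summarizes a history of range readings plus relative localization, and two independent Q-heads provide the double-critic estimates. This is our assumption of the general structure, not the authors' implementation; the dimensions and layer widths are placeholders.

```python
# Sketch: recurrent double-critic. The LSTM encodes an observation history;
# two Q-heads score the (history, action) pair independently.
import torch
import torch.nn as nn

class RecurrentDoubleCritic(nn.Module):
    def __init__(self, obs_dim: int, action_dim: int, hidden: int = 128):
        super().__init__()
        self.encoder = nn.LSTM(obs_dim, hidden, batch_first=True)
        self.q1 = nn.Sequential(nn.Linear(hidden + action_dim, hidden),
                                nn.ReLU(), nn.Linear(hidden, 1))
        self.q2 = nn.Sequential(nn.Linear(hidden + action_dim, hidden),
                                nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, obs_seq, action):
        # obs_seq: (batch, time, obs_dim); action: (batch, action_dim)
        _, (h_n, _) = self.encoder(obs_seq)
        feat = torch.cat([h_n[-1], action], dim=-1)
        # Double-critic trick: targets typically use min(q1, q2).
        return self.q1(feat), self.q2(feat)

critic = RecurrentDoubleCritic(obs_dim=24, action_dim=3)
q1, q2 = critic(torch.randn(8, 10, 24), torch.randn(8, 3))
```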
Deterministic and stochastic techniques in deep reinforcement learning (Deep-RL) have become promising solutions for improving motion control and decision-making tasks of various robots. Previous works have shown that these Deep-RL algorithms can generally be applied to the mapless navigation of mobile robots. However, they tend to use simple sensing strategies, since it has been shown that their performance degrades in high-dimensional state spaces, such as those based on image sensing. This paper presents a comparative analysis of two Deep-RL techniques, Deep Deterministic Policy Gradient (DDPG) and Soft Actor-Critic (SAC), when performing the task of mapless navigation for mobile robots. We aim to contribute by showing how the neural network architecture influences the learning itself, presenting quantitative results based on the time and distance of navigation of an aerial mobile robot for each approach. Overall, our analysis of six distinct architectures highlights that the stochastic approach (SAC) performs better with deeper architectures, while the exact opposite occurs with the deterministic approach (DDPG).
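One way to run this kind of depth comparison, sketched with the Stable-Baselines3 implementations of DDPG and SAC rather than the authors' own code: the environment name, layer widths, and training budget below are placeholders for illustration.

```python
# Sketch: compare DDPG and SAC under shallow vs. deep MLP architectures.
from stable_baselines3 import DDPG, SAC

shallow, deep = [64, 64], [256, 256, 256]

for algo, net_arch in [(DDPG, shallow), (DDPG, deep),
                       (SAC, shallow), (SAC, deep)]:
    # net_arch controls the hidden layers of both actor and critic MLPs.
    model = algo("MlpPolicy", "Pendulum-v1",
                 policy_kwargs=dict(net_arch=net_arch), verbose=0)
    model.learn(total_timesteps=10_000)
    print(algo.__name__, net_arch, "trained")
```

Evaluating each trained model on navigation time and distance, as the paper does, would then expose the deterministic-vs-stochastic depth trade-off.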
IceCube, a cubic-kilometer array of optical sensors built to detect atmospheric and astrophysical neutrinos between 1 GeV and 1 PeV, is deployed 1.45 km to 2.45 km below the surface of the ice sheet at the South Pole. The classification and reconstruction of events from the in-ice detectors play a central role in IceCube data analysis. Reconstructing and classifying events is challenging due to the detector geometry, inhomogeneous scattering and absorption of light in the ice, and, below 100 GeV, the relatively low number of signal photons produced per event. To address this challenge, IceCube events can be represented as point cloud graphs, with graph neural networks (GNNs) serving as the classification and reconstruction method. The GNN is able to distinguish neutrino events from cosmic-ray backgrounds, classify different neutrino event types, and reconstruct the deposited energy, direction, and interaction vertex. Based on simulation, we provide a comparison in the 1-100 GeV energy range to the state-of-the-art maximum-likelihood techniques used in current IceCube analyses, including the effects of known systematic uncertainties. For neutrino event classification, the GNN increases the signal efficiency by 18% at a fixed false positive rate (FPR), compared to current IceCube methods. Alternatively, the GNN reduces the FPR by more than a factor of 8 (to below half a percent) at a fixed signal efficiency. For the reconstruction of energy, direction, and interaction vertex, the resolution improves by 13%-20% on average compared to current maximum-likelihood techniques. When running on a GPU, the GNN is capable of processing IceCube events at a rate close to the median IceCube trigger rate of 2.7 kHz, which opens up the possibility of using low-energy neutrinos in online searches for transient events.
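A toy sketch of the point-cloud-graph representation described above: each sensor hit becomes a node carrying position, time, and charge features, and edges connect each hit to its k nearest neighbors. This is our own illustration; the feature layout, the choice of k, and the use of space-time coordinates for the neighbor search are assumptions, not IceCube's pipeline.

```python
# Sketch: build a k-nearest-neighbor graph over detector hits so a GNN
# layer can pass messages between nearby hits.
import torch

def knn_graph(hits: torch.Tensor, k: int = 8) -> torch.Tensor:
    """hits: (N, F) with columns like (x, y, z, t, charge).
    Returns an edge index of shape (2, N*k)."""
    coords = hits[:, :4]                          # spatial + temporal coordinates
    dist = torch.cdist(coords, coords)            # (N, N) pairwise distances
    dist.fill_diagonal_(float("inf"))             # exclude self-loops
    nbrs = dist.topk(k, largest=False).indices    # (N, k) nearest neighbors
    src = torch.arange(hits.size(0)).repeat_interleave(k)
    return torch.stack([src, nbrs.reshape(-1)])   # edges consumed by a GNN layer

event = torch.randn(120, 5)    # 120 simulated hits, 5 features each
edges = knn_graph(event)       # (2, 960) edge index
```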
Language models demonstrate both quantitative improvement and new qualitative capabilities with increasing scale. Despite their potentially transformative impact, these new capabilities are as yet poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors across 132 institutions. Task topics are diverse, drawing from linguistics, childhood development, math, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks that are believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers on BIG-bench, across model sizes spanning millions to billions of parameters. In addition, a team of human expert raters performed all tasks in order to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; and social bias typically grows with scale in settings with ambiguous context, though this can be improved with prompting.
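The calibration finding above concerns how well a model's confidence matches its accuracy; a common summary statistic is the expected calibration error (ECE), which bins predictions by confidence and averages the gap between confidence and accuracy in each bin. The sketch below is a generic implementation of ours, not BIG-bench's own scoring code.

```python
# Sketch: expected calibration error over binned model confidences.
import numpy as np

def expected_calibration_error(confidences, correct, n_bins: int = 10) -> float:
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap   # weight by fraction of samples in the bin
    return ece

print(expected_calibration_error([0.9, 0.8, 0.95, 0.6], [1, 1, 0, 1]))
```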
Delimiting salt inclusions from migrated images is a time-consuming activity that relies on highly human-curated analysis and is subject to interpretation errors or limitations of the methods available. We propose to use migrated images produced from an inaccurate velocity model (with a reasonable approximation of sediment velocity, but without salt inclusions) to predict the correct salt inclusions shape using a Convolutional Neural Network (CNN). Our approach relies on subsurface Common Image Gathers to focus the sediments' reflections around the zero offset and to spread the energy of salt reflections over large offsets. Using synthetic data, we trained a U-Net to use common-offset subsurface images as input channels for the CNN and the correct salt-masks as network output. The network learned to predict the salt inclusions masks with high accuracy; moreover, it also performed well when applied to synthetic benchmark data sets that had not been previously introduced to the network. Our training process tuned the U-Net to successfully learn the shape of complex salt bodies from partially focused subsurface offset images.
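A shape-level sketch of the input/output contract described here: each common-offset subsurface image becomes one input channel, and the network predicts a salt-mask of the same spatial size. The tiny convolutional net below stands in for the actual U-Net, and the number of offsets and image size are assumptions of ours.

```python
# Sketch: common-offset images stacked as channels -> per-pixel salt-mask logits.
import torch
import torch.nn as nn

n_offsets, H, W = 9, 128, 128
gathers = torch.randn(4, n_offsets, H, W)     # batch of common-offset stacks

net = nn.Sequential(                           # placeholder for the real U-Net
    nn.Conv2d(n_offsets, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),            # 1 output channel: mask logits
)
salt_logits = net(gathers)                     # (4, 1, H, W)
target_masks = torch.randint(0, 2, (4, 1, H, W)).float()
loss = nn.functional.binary_cross_entropy_with_logits(salt_logits, target_masks)
```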
Radiomics uses quantitative medical imaging features to predict clinical outcomes. Currently, for each new clinical application, the optimal radiomics method has to be found manually among the wide range of available options through a heuristic trial-and-error process. In this study we propose a framework to automatically optimize the construction of radiomics workflows per application. To this end, we formulate radiomics as a modular workflow and include a large collection of common algorithms for each component. To optimize the workflow per application, we employ automated machine learning using random search and ensembling. We evaluate our method in twelve different clinical applications, resulting in the following areas under the curve: 1) liposarcoma (0.83); 2) desmoid-type fibromatosis (0.82); 3) primary liver tumors (0.80); 4) gastrointestinal tumors (0.77); 5) colorectal liver metastases (0.61); 6) melanoma metastases (0.45); 7) hepatocellular carcinoma (0.75); 8) mesenteric fibrosis (0.80); 9) prostate cancer (0.72); 10) glioma (0.71); 11) Alzheimer's disease (0.87); and 12) head and neck cancer (0.84). We show that our framework performs competitively with human experts, outperforms a radiomics baseline, and performs similarly to or better than Bayesian optimization and more advanced ensemble approaches. Lastly, our method fully automatically optimizes the construction of radiomics workflows, thereby streamlining the search for radiomics biomarkers in new applications. To facilitate reproducibility and future research, we publicly release six datasets, the software implementation of the framework, and the code to reproduce this study.
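A condensed sketch of the idea of random search over a modular workflow with ensembling, written by us with scikit-learn rather than taken from the released framework: both the pipeline components and their hyperparameters are sampled, and the best-scoring configurations are kept as an ensemble. The component choices and budget are placeholders.

```python
# Sketch: randomly sample (preprocessing, classifier, hyperparameters),
# score each workflow by cross-validation, keep the top 5 as an ensemble.
import random
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler, StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=30, random_state=0)

scored = []
for _ in range(25):                                  # random-search budget
    workflow = Pipeline([
        ("scale", random.choice([StandardScaler(), MinMaxScaler()])),
        ("clf", random.choice([LogisticRegression(max_iter=1000),
                               SVC(C=random.choice([0.1, 1.0, 10.0]))])),
    ])
    score = cross_val_score(workflow, X, y, cv=3).mean()
    scored.append((score, workflow))

best_ensemble = [wf for _, wf in sorted(scored, key=lambda s: s[0])[-5:]]
```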
We describe the outcome of a data challenge conducted as part of the Dark Machines Initiative and the Les Houches 2019 physics workshop. The goal of the challenge was to detect signals of new physics at the LHC using unsupervised machine learning algorithms. First, we propose how an anomaly score may be implemented to define model-independent signal regions in LHC searches. We define and describe a large benchmark dataset, consisting of more than one billion simulated LHC proton-proton collision events. We then review a number of anomaly detection and density estimation algorithms in the context of the data challenge, and we measure their performance in a set of realistic analysis environments. We draw a number of useful conclusions that can aid the development of unsupervised new-physics searches during the third run of the LHC, and provide our benchmark dataset for future studies at https://www.phenomldata.org. The code to reproduce the analysis is provided at https://github.com/bostdiek/darkmachines-unsupervisedChallenge.
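A minimal sketch of the anomaly-score idea in the first contribution: fit a density model on background-like events only, score every event by how unlikely it is under that density, and define a model-independent signal region as the events above a score threshold. This is our own toy example with a kernel density estimator; the Gaussian stand-in features, bandwidth, and 99th-percentile cut are assumptions, not the challenge's algorithms.

```python
# Sketch: density-based anomaly score and a threshold-defined signal region.
import numpy as np
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(0)
background = rng.normal(0.0, 1.0, size=(5000, 4))   # stand-in event features
candidates = rng.normal(0.5, 1.5, size=(1000, 4))   # events to be classified

kde = KernelDensity(bandwidth=0.5).fit(background)
anomaly_score = -kde.score_samples(candidates)       # low density => high score

# Signal region: events more anomalous than 99% of the background.
threshold = np.quantile(-kde.score_samples(background), 0.99)
signal_region = candidates[anomaly_score > threshold]
print(len(signal_region), "events selected")
```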